NCQA My HEDIS 2023 Validation Model

Antidepressant Medication Management

Author

John Ryan Kivela, MA

Published

June 27, 2023

Welcome (Bienvenido)

Hello,

This project is a model that compares Report Data with Real Data: the Health Choice Value Based Purchasing Quality Roster (VBPQR)1 for Antidepressant Medication Management (AMM) is measured against Health Choice adjudicated claims. This project is similar in many ways to the VBP Validation Model for Follow Up after Hospitalization (FUH)2.

The guidelines for antidepressant medication management are far more nuanced3. Nonetheless, we have endeavored to define the eligible population using adjudicated claims, measuring our Real Data against the VBP Quality Report Data.

The results were surprising! While our analysis of the FUH7 measure has repeatedly demonstrated an underrating of Alliance performance, our first look at AMM seems to show an overrating, not that there’s anything wrong with that.

We hope you enjoy reading this data story. We are always looking for new collaborators, so please reach out and share your ideas.

Sincerely,

Ryan

The Alliance Value Based Purchasing Validation Model for AMM

The thing about measuring VBP

So, there we were in the first months of the Alliance ACO business, and EVERY provider was concerned that the Value Based Purchasing Quality Roster was inaccurate. Our providers were concerned that their performance scores were underrated, not giving them the credit they deserved.

The Alliance wants its providers to be their best selves and to be recognized. Lack of confidence in the scoring mechanism crowds out enthusiasm and damages commitment to improvement.

So, in January 2023 we developed a validation model for the Follow Up after Hospitalization (FUH) HEDIS measure.

The VBP Validation Model FUH7 (2023): Key Takeaways:
  • Created a validated data model that can be used to identify eligible member events for FUH7
  • The core logic can be used to develop models for other HEDIS/NCQA measures.
  • The research affirmed that the providers were correct. They were indeed underrated for the 2022 measurement year and deserved a higher score by about 4 percentage points.

So what’s next?

That brings us to the new measurement year, January 2023 to December 2023. In this project, the VBP Validation Model will be applied to the Antidepressant Medication Management (AMM) HEDIS My20234 performance measure.

This is the first time that anyone in the HCA network has done this assessment and we hope the results are illuminating.

This model compares Report Data from the VBP Quality Roster with Real Data from adjudicated claims.

The model will answer the following questions:

  • Do the results of the Value Based Purchasing Quality Roster (VBPQR) accurately reflect actual adjudicated claims?
  • Are the performance scores of Alliance Providers accurate and reliable?

The business objectives are twofold:

  • Build a data model for validating eligibility for the AMM eligible population
  • Determine impact of measurement on Alliance ACO performance scores

Why invest in this research?

Inaccurate measurement of VBP performance leads to an invalid assessment of the Alliance Providers’ delivery of services to their patients.

In previous years, Alliance providers were underrated on Follow Up after Hospitalization (FUH7) by at least 4 percentage points5.

Alliance Provider performance scores were underrated (red blocks) by at least 4 percentage points in 2022

Underrating provider performance frustrates leadership and dampens the spirits of the clinical teams as they do this very difficult work.

Accurate measurement offers a better opportunity to identify patients in need, maximize financial incentives associated with performance, and support provider morale.

Setting up the experiment

This manuscript is written using Brisk-DM. Brisk-DM is a structure for doing data science that is based on CRISP-DM6, but tailored for executives and business leads. Brisk-DM uses an approachable communication style that is more understandable to professionals in the workplace.

The current state of things

At the time of this writing, the Alliance has received the first three months of VBP Quality Rosters for the 2023 measurement year. Our application of the VBP Validation Model for FUH7 in June 2023 revealed a pattern similar to 2022, where we found that performance scores for FUH were underrated7.

The results for FUH are under further investigation while we develop this model for AMM.

Hablemos de datos (Let’s talk about the data)

This data model is simple in concept, but complex in design. The model compares Report Data from the HC VBP Quality Roster with Real Data from Health Choice adjudicated claims.

What are the data sources?

VBP Quality Roster

  • Health Choice receives this data from the third party vendor, Cotivity

  • An Excel workbook containing a roster of the members deemed eligible for VBP HEDIS NCQA measures, including compliance status

  • The Alliance receives individual rosters for each Alliance Provider

Adjudicated Claims

  • A data set of adjudicated claims queried directly from the HCA data warehouse

  • Records are gathered from the Claims and PBM databases

  • Claims are extracted for all eligible service codes for the measurement year (8 individual files)

Is the data we use reliable?

VBP Quality Roster

  • The outstanding question with the VBPQR is whether the underlying data from the third party vendor, Cotivity, is accurate.

  • The report quality itself is very high, as it is compiled by Health Choice Business Intelligence staff.

  • The underlying data is what is under investigation.

Adjudicated Claims

  • Claims data is of the highest quality, as it is compiled and reviewed extensively by Health Choice for its own business purposes.

  • The quality of this data is also reviewed by state and federal regulatory entities, like AHCCCS.

The AMM Eligibility Model

We now embark upon building a data set of eligible claims for the AMM measure. This process will be significantly more complicated than our previous model for FUH. The description of an eligible member event is very nuanced for AMM, but in short…

An eligible member event is defined as follows:
  • The individual is an adult, aged 18 or older, and
  • A prescription was filled for an eligible antidepressant medication within the “Intake Period” (Index Prescription Start Date (IPSD)), and
  • No other prescriptions were filled for eligible antidepressant medications within the 105 days prior to the IPSD (Test for Negative Medication History (NMH)), and
  • The individual had a service for at least one of the eligible Major Depressive Disorder diagnoses (Test for Major Depressive Disorder (MDDx)) within 60 days of the IPSD (60 days before or 60 days after), and
  • The individual had continuous enrollment (Test for Continuous Enrollment (CE)) for
    • 105 days before the IPSD, and
    • 231 days after the IPSD
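As a quick illustration, the four windows can be computed relative to a single hypothetical IPSD. This is only a sketch: the date below is illustrative, not drawn from the actual claims data.

```r
# Sketch of the AMM eligibility windows anchored on a hypothetical IPSD
ipsd <- as.Date("2022-06-01")  # illustrative date, not from the real data

nmh_window  <- c(start = ipsd - 105, end = ipsd - 1)   # Negative Medication History lookback
mddx_window <- c(start = ipsd - 60,  end = ipsd + 60)  # Major Depressive Disorder service window
ce_window   <- c(start = ipsd - 105, end = ipsd + 231) # Continuous Enrollment window
```

Every other test in the model is expressed relative to these windows, which is why the IPSD is the anchor point described below.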

Check out the visualization of the AMM Eligibility Model below. There are many components of this model with overlapping time frames. We will describe these steps in further detail in the next section.

AMM Eligibility Model (Kivela, 2023)

You may also think of the criteria as a tumbling bike lock. All four criteria must line up, in the right order, for the lock to open. Once unlocked, the resulting data set es El Dato Verdad, the Real Data.

AMM Eligibility: Primary criteria (the four tumblers of the bike lock):

  • Index Prescription Start Date (IPSD)
  • Negative Medication History (NMH)
  • Major Depressive Diagnosis (MDDx)
  • Continuous Enrollment (CE)

The Index Prescription Start Date

The first criterion that must be met is the Index Prescription Start Date (IPSD). This is the date that an individual filled a prescription for one of the HEDIS My 2023 eligible antidepressant medications.

The IPSD is the anchor point for the rest of the eligibility measures. An individual must have an eligible IPSD in order to be assessed against the other tests for this measure.

The member must have a prescription fill for an eligible antidepressant medication within the eligible dates indicated in the visualization above.

In order to assess IPSD we did the following:

The 12 month Intake Period begins on May 1 of the year prior to the Measurement Year.

  • The Measurement Year is: 01-01-2023 to 12-31-2023

  • The Intake Period is: 05-01-2022 to 04-30-2023

  • We extract claims from the PBM database at the Health Choice data warehouse for the My2023 eligible medications dispensed within the Intake Period.

  • The NDC to GPI crosswalk must be used to determine eligible medications. The code to create the crosswalk is in the Appendix.

  • The minimum date of the Test for Negative Medication History must also be included in the PBM Claims query. The test for NMH is outlined below.

  • The Negative Medication History range is: 01-16-2022 to 01-15-2023

  • A prescription fill date within the Intake Period is called the Index Prescription Start Date (IPSD)

Here is the code to extract claims data from the HC data warehouse:

Code
-- Declare Date Range
Declare @start as date = '01-16-2022'
Declare @end as date = '04-30-2023'

SELECT
  pbm.clientID,
  id.PrimaryId,
  pbm.AsOfDate,
  pbm.dtefilled,
  pbm.GpiNumber,
  pbm.GPIClassification,
  pbm.prodname,
  pbm.genericnme,
  pbm.LabelName,
  pbm.GroupId,
  pbm.preslstnme,
  pbm.pbmrxclaimnbr,
  pbm.claimsts,
  pbm.AmtPaidFinal,
  pbm.productid,
  pbm.decimalqty,
  pbm.dayssupply,
  pbm.Gender,
  pbm.birthdte,
  pbm.mbrage
  
FROM
  PBM.dbo.HCICPharmacyClaimSummary pbm
  LEFT OUTER JOIN GlobalMembers.dbo.ClientIdPlus id ON pbm.clientID = id.AzAhcccsId
  
WHERE
  pbm.dtefilled BETWEEN @start AND @end
    AND
    pbm.mbrage >= 18
    AND
    pbm.GpiNumber IN (concatenated_values_GPI) -- placeholder: paste in the GPI list built from the crosswalk (see Appendix)
Code
# Import the raw claims
PBMClaims <- read.csv("./data/DataRaw_PBMClaims_2023-05-31.csv")

# Convert numeric variable to date
PBMClaims$dtefilled <- as.Date(as.character(PBMClaims$dtefilled), format = "%Y%m%d")

# Convert character variable to date
PBMClaims$AsOfDate <- as.Date(PBMClaims$AsOfDate, format = "%m/%d/%Y")

# write.csv(PBMClaims, "./data/output/PBMClaims.csv")

PBMClaims_NMHTest <- PBMClaims

# Convert the 'dtefilled' column to Date type if it's not already in the correct format
PBMClaims_NMHTest$dtefilled <- as.Date(PBMClaims_NMHTest$dtefilled)

# Define the start and end dates for IPSDTestResult
ip_start_date <- as.Date("2022-05-01")
ip_end_date <- as.Date("2023-04-30")

# Create the IPSDTestResult column based on the date conditions
# A result of "TRUE" indicates a positive IPSD
PBMClaims_NMHTest$IPSDTestResult <- ifelse(
  PBMClaims_NMHTest$dtefilled >= ip_start_date 
  & PBMClaims_NMHTest$dtefilled <= ip_end_date, "TRUE", "FALSE")

Test for Negative Medication History (NMH)

The Test for Negative Medication History determines if an individual is a new recipient of an antidepressant medication. An individual passes this test if they have not had a prescription for an antidepressant medication within 105 days prior to the IPSD.

Be patient. If you are replicating this project, this part of the program takes about 20 minutes to run. If you think about what it is doing, it is comparing every row of claims (~147,000 rows) against the IPSD Intake Period range, and, for all of the positive returns, comparing each one to each of the original 147,000 rows for the NMH test.

Think about trying to guess the first 2 numbers on the bike lock, multiplied by 147,000. Anyway, it’s a lot. Go grab some coffee.

In order to conduct the test for Negative Medication History we took the following steps:


The PBM Claims data set is queried from the Health Choice data warehouse using the procedure outlined above. This initial pull includes the entire potential range of PBM Claims for the Intake Period and the Negative Medication History period. Refer again to the visualization above for a visual aid.

  1. Import the raw claims

    • The Negative Medication History range is: 01-16-2022 to 01-15-2023

    • This data set is already filtered for the Min and Max possible ranges as outlined above.

  2. Change variable types to match for analysis (e.g. formatting as date)

  3. Conduct the Negative Medication History Test

    • If the fill date is within the eligible Intake Period (IPSD), and
    • If there are no other fills within 105 days before the fill date, then
    • The test confirms the Negative Medication History (TRUE); if not,
    • The test denies the Negative Medication History (FALSE)


This is the code, my friends:

Code
# Initialize NMHTestResult column with empty values
PBMClaims_NMHTest$NMHTestResult <- ""

# Loop through each row of the data frame
for (i in 1:nrow(PBMClaims_NMHTest)) {
  # Get the current ClientID, GpiNumber, and fill date (the candidate IPSD)
  current_client <- PBMClaims_NMHTest$clientID[i]
  current_gpi <- PBMClaims_NMHTest$GpiNumber[i]
  ipsd_test_date <- PBMClaims_NMHTest$dtefilled[i]
  
  # Find any fill with the same ClientID and GpiNumber dispensed within
  # the 105 days before the candidate IPSD (a disqualifying prior fill)
  matching_observation <- PBMClaims_NMHTest$clientID[
    PBMClaims_NMHTest$clientID == current_client 
    & PBMClaims_NMHTest$GpiNumber == current_gpi 
    & PBMClaims_NMHTest$dtefilled >= ipsd_test_date - 105
    & PBMClaims_NMHTest$dtefilled < ipsd_test_date]
  
  # If there is no disqualifying prior fill, the Negative Medication
  # History is confirmed (TRUE); otherwise it is denied (FALSE)
  if (length(matching_observation) == 0) {
    PBMClaims_NMHTest$NMHTestResult[i] <- "TRUE"
  } else {
    PBMClaims_NMHTest$NMHTestResult[i] <- "FALSE"
  }
}
# write.csv(PBMClaims_NMHTest, "./data/output/PBMClaims_NMHTest.csv")
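If the 20-minute row-by-row loop becomes a bottleneck, the same test can be sketched in vectorized form. This is a sketch under assumptions: it groups by clientID and GpiNumber as the loop does, uses a toy data frame in place of the real PBMClaims_NMHTest, and treats a fill as passing NMH when no other fill for the same member and medication falls in the 105 days before it.

```r
library(dplyr)

# Toy data standing in for PBMClaims_NMHTest (columns assumed from the model above)
toy <- data.frame(
  clientID  = c("A", "A", "B"),
  GpiNumber = c("581600", "581600", "581600"),
  dtefilled = as.Date(c("2022-05-01", "2022-06-15", "2022-06-01"))
)

# A fill passes NMH when no other fill in its group lands in the 105 days before it
nmh_flag <- function(dates) {
  sapply(dates, function(d) !any(dates >= d - 105 & dates < d))
}

toy <- toy %>%
  group_by(clientID, GpiNumber) %>%
  mutate(NMHTestResult = nmh_flag(dtefilled)) %>%
  ungroup()
```

Because each group is only compared against itself, this avoids scanning all ~147,000 rows for every single row.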
Fantastico!

Wow! I can’t believe that we actually pulled that off!

Now we have our confirmation of IPSD and NMH.

Two down, two to go. Awesome!

Next up is to determine if there was an eligible diagnosis.

Test for Major Depressive Disorder

The next stop on the road is to confirm that the individual had a service for at least one of the eligible Major Depressive Disorder diagnoses (MDDx) within 60 days of the IPSD (60 days before or 60 days after).

A diagnosis of Major Depressive Disorder is identified by cross referencing members with eligible IPSD and NMH (using PBM records), against behavioral health claims records for MDD. Individual identifying data is used to join PBM records with claims records, and make the comparison.

The MDDx test uses behavioral health claims:

  • Claims are pulled for the eligible date range (refer to the visualization above).

  • The claims are searched for the qualifying MDDx codes, which come from the HEDIS Value Set List, My2023.

  • The claims are queried from claims.dbo.shcavos using the same technique as the FUH model: the same code, except searching for diagnosis codes (Dx) instead of service codes (svccode).

Pull the list of MDDx diagnosis codes

Import the HEDIS My2023 value set for eligible Major Depressive Diagnoses
Code
# Import the Value Set List
ValueSetListMy2023 <- read.csv("./data/ValueSetListMy2023.csv")

# Create a list of the diagnosis codes
MDDxList <- ValueSetListMy2023$Code

# Concatenate for pasting into the SQL VALUES list
concatenated_values_MDDx <- paste0("('", ValueSetListMy2023$Code, "')", collapse = ", ")
Query the Health Choice data warehouse for eligible MDDx claims
Code
-- Declare start and end variables
DECLARE @start DATE = '2022-03-22';
DECLARE @end DATE = '2023-10-30';

-- Check if the temporary table exists and drop it if it does
IF OBJECT_ID('tempdb..#ValueSetListMy2023') IS NOT NULL
    DROP TABLE #ValueSetListMy2023;

-- Create a temporary table
CREATE TABLE #ValueSetListMy2023 (Code VARCHAR(100) COLLATE SQL_Latin1_General_CP1_CI_AS);

-- Insert values into the temporary table
INSERT INTO #ValueSetListMy2023 (Code)
VALUES ('101'), ('100'), ('207'), ('116'), ('126'), ('136'), ('146'), ('156'), ('110'), ('120'), 
       ('130'), ('140'), ('150'), ('160'), ('170'), ('190'), ('200'), ('210'), ('1000'), 
       ('213'), ('214'), ('206'), ('202'), ('111'), ('121'), ('131'), ('141'), ('151'), 
       ('211'), ('171'), ('172'), ('173'), ('174'), ('122'), ('132'), ('142'), ('152'), 
       ('112'), ('117'), ('127'), ('137'), ('147'), ('157'), ('119'), ('129'), ('139'), 
       ('149'), ('159'), ('169'), ('219'), ('209'), ('179'), ('199'), ('113'), ('123'), 
       ('133'), ('143'), ('153'), ('203'), ('114'), ('124'), ('134'), ('144'), ('154'), 
       ('204'), ('212'), ('118'), ('128'), ('138'), ('148'), ('158'), ('1002'), ('1001'), 
       ('167'), ('164'), ('191'), ('192'), ('193'), ('194'), ('201'), ('208'), ('F32.0'), 
       ('F32.1'), ('F32.2'), ('F32.3'), ('F32.4'), ('F32.9'), ('F33.0'), ('F33.1'), ('F33.2'), 
       ('F33.3'), ('F33.41'), ('F33.9'), ('14183003'), ('2618002'), ('726772006'), 
       ('320751009'), ('36923009'), ('370143000'), ('1.08111E+16'), ('1.08112E+16'), 
       ('42925002'), ('69392006'), ('63778009'), ('25922000'), ('87512008'), ('79298009'), 
       ('1.6266E+16'), ('40379007'), ('720455008'), ('720454007'), ('720451004'), ('832007'), 
       ('15639000'), ('1.62668E+16'), ('18818009'), ('719592004'), ('720453001'), 
       ('720452006'), ('66344007'), ('38694004'), ('39809009'), ('319768000'), ('71336009'), 
       ('268621008'), ('191610000'), ('191611001'), ('191613003'), ('1.62646E+16'), 
       ('1.62649E+16'), ('1.62648E+16'), ('450714000'), ('73867007'), ('33736005'), 
       ('60099002'), ('75084000'), ('2.51E+11'), ('430852001'), ('77911002'), ('20250007'), 
       ('76441001'), ('1.6267E+16'), ('2.81E+11'), ('28475009'), ('33078009'), ('15193003'), 
       ('36474008'), ('191604000');

-- Query to retrieve data from the claims.dbo.shcavos table
SELECT DISTINCT
    shcavos.primaryID, 
    id.BCBSMedicaidId AS MemberID,
    shcavos.begDate,
    shcavos.PrimaryDiagnosis,
    shcavos.Dx1,
    shcavos.Dx2,
    shcavos.Dx3,
    shcavos.Dx4,
    shcavos.Dx5,
    shcavos.Dx6,
    shcavos.Dx7,
    shcavos.Dx8,
    shcavos.Dx9,
    shcavos.Dx10,
    shcavos.Dx11,
    shcavos.Dx12,
    CASE WHEN v.Code IS NOT NULL THEN 'True' ELSE 'False' END AS MatchFound
FROM claims.dbo.shcavos AS shcavos
LEFT JOIN GlobalMembers.dbo.ClientIdPlus id ON shcavos.primaryID = id.primaryID
LEFT JOIN #ValueSetListMy2023 AS v ON shcavos.PrimaryDiagnosis COLLATE SQL_Latin1_General_CP1_CI_AS = v.Code
                                    OR shcavos.Dx1 COLLATE SQL_Latin1_General_CP1_CI_AS = v.Code
                                    OR shcavos.Dx2 COLLATE SQL_Latin1_General_CP1_CI_AS = v.Code
                                    OR shcavos.Dx3 COLLATE SQL_Latin1_General_CP1_CI_AS = v.Code
                                    OR shcavos.Dx4 COLLATE SQL_Latin1_General_CP1_CI_AS = v.Code
                                    OR shcavos.Dx5 COLLATE SQL_Latin1_General_CP1_CI_AS = v.Code
                                    OR shcavos.Dx6 COLLATE SQL_Latin1_General_CP1_CI_AS = v.Code
                                    OR shcavos.Dx7 COLLATE SQL_Latin1_General_CP1_CI_AS = v.Code
                                    OR shcavos.Dx8 COLLATE SQL_Latin1_General_CP1_CI_AS = v.Code
                                    OR shcavos.Dx9 COLLATE SQL_Latin1_General_CP1_CI_AS = v.Code
                                    OR shcavos.Dx10 COLLATE SQL_Latin1_General_CP1_CI_AS = v.Code
                                    OR shcavos.Dx11 COLLATE SQL_Latin1_General_CP1_CI_AS = v.Code
                                    OR shcavos.Dx12 COLLATE SQL_Latin1_General_CP1_CI_AS = v.Code
WHERE shcavos.begDate BETWEEN @start AND @end
AND CASE WHEN v.Code IS NOT NULL THEN 'True' ELSE 'False' END = 'True';

-- Save the result as MDDxClaims_dateslug.csv
Load data from the claims query results into the data model
Code
# Import the MDDxClaims from csv
MDDxClaims <- read_csv("./data/MDDxClaims_20230602.csv")
Conduct the MDDx Test
Code
# Load the MDDxClaims data
MDDxClaimsTest <- MDDxClaims %>%
  rename(PrimaryId = primaryID)

# Merge PBMClaims_NMHTest and MDDxClaims on PrimaryId, keeping all rows and columns
PBMClaims_MDDxMerge <- merge(x = PBMClaims_NMHTest, y = MDDxClaimsTest, by = "PrimaryId", all = TRUE)

PBMClaims_MDDxTest <- PBMClaims_MDDxMerge

# Create a new column called MDDxTestResult and conduct test
PBMClaims_MDDxTest$MDDxTestResult <- ifelse(abs(as.numeric(as.Date(PBMClaims_MDDxTest$begDate) - as.Date(PBMClaims_MDDxTest$dtefilled))) <= 60, TRUE, FALSE)

# Select the columns of interest, change name to _MDDXTest
PBMClaims_MDDxTest <- PBMClaims_MDDxTest[, c("PrimaryId", "clientID", "MemberID", "dtefilled", "IPSDTestResult", "NMHTestResult" , "MDDxTestResult", "begDate", "PrimaryDiagnosis", "GpiNumber", "LabelName", "preslstnme", "decimalqty", "dayssupply", "AmtPaidFinal")]

#write.csv(PBMClaims_MDDxTest, "./data/output/PBMClaims_MDDxTest.csv")
Woo hoo! It worked!

We have now determined that there is:

  1. A valid Index Prescription Start Date
  2. A Negative Medication History
  3. A valid service for Major Depressive Disorder

Test for Continuous Enrollment

Ok, three down, one to go. Awesome!

Now that we have confirmed IPSD, NMH, and MDDx, we will compare the member information against our enrollment records to determine there was an eligible enrollment period associated with the prescription fill.

An event is eligible if the individual had continuous enrollment for -105 days before the IPSD, AND +231 days after the IPSD.


Take our main table, now called PBMClaims_MDDxTest, and compare the PrimaryId associated with each event against our member enrollment roster developed in the Alliance Progress Report 8. This is the full global members roster because we are not filtering for active membership only. Individuals who are currently disenrolled are included as well.

First, confirm if the eligible event matches anyone on the enrollment roster.

If yes, then compare:

  1. The IPSD against the enrollment date, and confirm that the enrollment date is at least 105 days before the IPSD
  2. The IPSD against the disenrollment date, and confirm that the disenrollment date is at least 231 days after the IPSD

Here’s the code

Code
# Import the original global members roster
GlobalMembers_orig <- read_xlsx("./data/data_original_glblmbrs_2023-05-01_globalMembersRoster.xlsx", sheet = "Sheet1")

# Create a working copy and rename "x" to PrimaryId
GlobalMembers <- GlobalMembers_orig |> 
  rename("PrimaryId" = x)

# Using the PBMClaims_MDDxTest table as a base, attach Global Members data by PrimaryId
PBMClaims_CETest1 <- merge(x = PBMClaims_MDDxTest,
              y = GlobalMembers,
              by = "PrimaryId",
              all = TRUE)

# Continuous Enrollment Test (Pre)

# Calculate the difference in days. If and only if bhhEffectiveDate is earlier than the dtefilled, then return the number of days between the bhhEffectiveDate and the dtefilled
PBMClaims_CETest1$CEDaysDiff_Pre <- ifelse(PBMClaims_CETest1$bhhEffectiveDate < PBMClaims_CETest1$dtefilled,
                                   as.integer(difftime(PBMClaims_CETest1$dtefilled, PBMClaims_CETest1$bhhEffectiveDate, units = "days")),
                                   NA)

# Create the CEDaysDiff_PreTest column (enrollment at least 105 days before the IPSD)
PBMClaims_CETest1$CEDaysDiff_PreTest <- PBMClaims_CETest1$CEDaysDiff_Pre >= 105


# Continuous Enrollment Test (Post)


# Get today's date
today <- Sys.Date()

# If the disenrollment date is null (meaning still enrolled), then return the number of days between dtefilled and today. If the disenrollmentDate is not null (meaning the person is disenrolled), then calculate the number of days between dtefilled and the disenrollment date.

PBMClaims_CETest1$CEDaysDiff_Post <- ifelse(is.na(PBMClaims_CETest1$disenrollmentDate),
                                   as.integer(difftime(today, PBMClaims_CETest1$dtefilled, units = "days")),
                                   ifelse(PBMClaims_CETest1$disenrollmentDate > PBMClaims_CETest1$dtefilled,
                                          as.integer(difftime(PBMClaims_CETest1$disenrollmentDate, PBMClaims_CETest1$dtefilled, units = "days")),
                                          NA))

# Create the CEDaysDiff_PostTest column (enrollment at least 231 days after the IPSD)
PBMClaims_CETest1$CEDaysDiff_PostTest <- PBMClaims_CETest1$CEDaysDiff_Post >= 231

# Conduct test to see if both pre and post conditions are met
# Create new column "CETestResult"
PBMClaims_CETest1$CETestResult <- ifelse(PBMClaims_CETest1$CEDaysDiff_PostTest & PBMClaims_CETest1$CEDaysDiff_PreTest, TRUE, FALSE)

# Select Choice variables:
PBMClaims_CE_Test <- PBMClaims_CETest1[, c("PrimaryId", "clientID", "MemberID", "dtefilled", "IPSDTestResult", "NMHTestResult", "MDDxTestResult", "CETestResult", "begDate", "PrimaryDiagnosis", "GpiNumber", "LabelName", "preslstnme", "decimalqty", "dayssupply", "CEDaysDiff_Pre", "CEDaysDiff_Post", "AmtPaidFinal")]

# Sort the data
PBMClaims_CE_Test <- PBMClaims_CE_Test %>%
  arrange(desc(PrimaryId), desc(dtefilled), IPSDTestResult)

#write.csv(PBMClaims_CE_Test, "./data/output/PBMClaims_CE_Test.csv")
Code
# Conduct the AMM Eligibility Test

# Convert the variables to logical if needed
PBMClaims_CE_Test$IPSDTestResult <- as.logical(PBMClaims_CE_Test$IPSDTestResult)
PBMClaims_CE_Test$NMHTestResult <- as.logical(PBMClaims_CE_Test$NMHTestResult)
PBMClaims_CE_Test$MDDxTestResult <- as.logical(PBMClaims_CE_Test$MDDxTestResult)
PBMClaims_CE_Test$CETestResult <- as.logical(PBMClaims_CE_Test$CETestResult)

# Create the new column "AMMEligibilityTest"
PBMClaims_CE_Test$AMMEligibilityTestResult <- with(PBMClaims_CE_Test, IPSDTestResult & NMHTestResult & MDDxTestResult & CETestResult)

# Select Choice variables:
AMMEligibilityTest <- PBMClaims_CE_Test[, c("PrimaryId", "clientID", "MemberID", "dtefilled", "AMMEligibilityTestResult", "IPSDTestResult", "NMHTestResult", "MDDxTestResult", "CETestResult", "begDate", "PrimaryDiagnosis", "GpiNumber", "LabelName", "preslstnme", "decimalqty", "dayssupply", "CEDaysDiff_Pre", "CEDaysDiff_Post", "AmtPaidFinal")]

# Sort the data
AMMEligibilityTest <- AMMEligibilityTest %>%
  arrange(desc(PrimaryId), desc(dtefilled), AMMEligibilityTestResult)

# write.csv(AMMEligibilityTest, "./data/output/AMMEligibilityTest.csv")

# write.csv(AMMEligibilityTest, "./data/output/AMMEligibilityTest_Copy.csv")
AMM Eligibility, Check!

We have now determined that there is:

  1. A valid Index Prescription Start Date
  2. A Negative Medication History
  3. A valid service for Major Depressive Disorder
  4. A valid period of enrollment

Value-based Purchasing Quality Reports (VBP QR)

Now that we have a reliable data set for the Real Data (adjudicated claims), we will introduce the Report Data (VBP Quality Roster). We will import and aggregate the Report Data, clean it up, and then compare it with Real Data.

The VBP Roster comes to us from Health Choice, but the underlying data is analyzed and produced by a third party vendor, Cotivity.

The Alliance receives a separate report for each of the Alliance Providers. This procedure outlines how the VBP QR is input from the original report and transformed to actionable data.

The resulting table is a complete, cleaned roster of all of the individual member events, for all of the VBP Measures, for all of the Alliance Providers. It is then filtered to assess each measure individually.

This table is used to construct the VBP Quality Report Dashboard as well as the Alliance Progress Report.

The most recent Value-based Purchasing (VBP) Quality Roster (04-27-2023) for each Alliance Provider (AP) was gathered into the folder (./data/VBPReports/Quality). The Roster page from the Excel data model was extracted from each of the individual reports and compiled into an aggregated data frame, DataRaw_VBPQR_AllAPsCombined, that contains the results of all APs.

The VBP Report data is cumulative over the VBP measurement year 01-01-2023 to 12-31-2023 and has a 60 day claims lag, such that the 04-27-2023 VBP QR contains claims adjudicated through 02-28-2023.

The table, DataRaw_VBPQR_AllAPsCombined, is then transformed to remove superfluous white space and other text and table titles that were imported by default.

The key transformations are:

  • Remove superfluous rows that contain tables names and descriptions.

  • Promote data from the row with the column names [Row 6] into the column headers.

  • Complete the removal of superfluous rows by filtering the NA from the Gap Status variable.

  • Create a variable for Provider Shortnames.

  • Store the transformed data as VBPQR_AllAPsCombined_Cleaned

The data in the variable Health Home TIN & Name was used to create a new vector of names called Provider_Shortname, and then filtered to only include Alliance Providers (CBI, CPIH, EHS, LCBHC, MMHC, PH, SBHS, SHG, TGC).

Finally, the SubMeasureID variable was filtered to only include the AMM2 measure, and a list of the Member IDs was selected.

A copy of the file VBPQR_AllAPsCombined_Cleaned, was stored as VBPQR_AllAPsCombined_Cleaned2 for use later.
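The Provider_Shortname mapping described above can be sketched as a simple lookup. This is an illustration under assumptions: the `make_shortname` helper and the name fragments below are hypothetical and should be matched to the strings that actually appear in the Health Home TIN & Name column (EHS and any other providers follow the same pattern).

```r
# Map 'Health Home TIN & Name' strings onto short provider codes.
# Name fragments are assumptions for illustration, derived from the report file names.
shortname_lookup <- c(
  "Community Bridges"    = "CBI",
  "Change Point"         = "CPIH",
  "Little Colorado"      = "LCBHC",
  "Mohave Mental Health" = "MMHC",
  "Polara Health"        = "PH",
  "Southwest Behavioral" = "SBHS",
  "Spectrum Health"      = "SHG",
  "The Guidance Center"  = "TGC"
)

# Return the shortname whose fragment appears in the TIN & Name string, else NA
make_shortname <- function(tin_and_name) {
  hits <- sapply(names(shortname_lookup),
                 function(frag) grepl(frag, tin_and_name, fixed = TRUE))
  if (any(hits)) shortname_lookup[[which(hits)[1]]] else NA_character_
}
```

Applied with `sapply` over the Health Home TIN & Name column, this yields the Provider_Shortname vector used for filtering to Alliance Providers.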

Code
# Import the unaltered VBP report, "Detail" sheet, as received from HCA
# 5/1/23 sheet = "Detail" was changed by HCA to sheet = "Roster"
vbp_cbi   <-  read_xlsx("./data/VBPReports/Quality/vbpbhh_report_2023-05-31_94-2880847_Community_Bridges_HCA_BHH_VBP_Quality_Roster.xlsx", sheet = "Roster")
vbp_cpih  <-  read_xlsx("./data/VBPReports/Quality/vbpbhh_report_2023-05-31_86-0215065_Change_Point_Integrated_Health_HCA_BHH_VBP_Quality_Roster.xlsx", sheet = "Roster")
vbp_lcbhc <-  read_xlsx("./data/VBPReports/Quality/vbpbhh_report_2023-05-31_86-0250938_Little_Colorado_Behavioral_Health_HCA_BHH_VBP_Quality_Roster.xlsx", sheet = "Roster")
vbp_mmhc  <-  read_xlsx("./data/VBPReports/Quality/vbpbhh_report_2023-05-31_86-0214457_Mohave_Mental_Health_HCA_BHH_VBP_Quality_Roster.xlsx", sheet = "Roster")
vbp_ph    <-  read_xlsx("./data/VBPReports/Quality/vbpbhh_report_2023-05-31_86-0206928_Polara_Health_HCA_BHH_VBP_Quality_Roster.xlsx", sheet = "Roster")
vbp_sbhs  <-  read_xlsx("./data/VBPReports/Quality/vbpbhh_report_2023-05-31_86-0290033_Southwest_Behavioral_Health_HCA_BHH_VBP_Quality_Roster.xlsx", sheet = "Roster")
vbp_shg   <-  read_xlsx("./data/VBPReports/Quality/vbpbhh_report_2023-05-31_86-0207499_Spectrum_Health_Group_HCA_BHH_VBP_Quality_Roster.xlsx", sheet = "Roster")
vbp_tgc   <-  read_xlsx("./data/VBPReports/Quality/vbpbhh_report_2023-05-31_86-0223720_The_Guidance_Center_HCA_BHH_VBP_Quality_Roster.xlsx", sheet = "Roster")

# Pro Tip
# if any of the tables pick up rogue columns...
# vbp_cbi <- vbp_cbi [,-1] 
# colnames(vbp_cbi) <- c("BCBSAZ Health Choice" ,"...2", "...3", "...4", "...5", "...6", "...7", "...8", "...9")

# Bind the Roster sheet from all providers into one table
DataRaw_VBPQR_AllAPsCombined <- rbind(
  vbp_cbi,
  vbp_cpih,
  vbp_lcbhc,
  vbp_mmhc,
  vbp_ph,
  vbp_sbhs,
  vbp_shg,
  vbp_tgc
)

# write to csv; date of file = date of VBP QR report
# write.csv(DataRaw_VBPQR_AllAPsCombined, "./data/output/2023-05-31_DataRaw_VBPQR_AllAPsCombined.csv")
Code
# create a safe copy of the original data
VBPQR_AllAPsCombined_Cleaned <- DataRaw_VBPQR_AllAPsCombined

# Filter out superfluous rows where the second column is empty
VBPQR_AllAPsCombined_Cleaned <- VBPQR_AllAPsCombined_Cleaned |>  
  filter(!is.na(`...2`))

# The real headers were imported as row 1; promote them to column names  # 5/2/23 - updated from "6" 
colnames(VBPQR_AllAPsCombined_Cleaned) <- VBPQR_AllAPsCombined_Cleaned[1, ] 

# Remove the header row, which now duplicates the column names
VBPQR_AllAPsCombined_Cleaned <- VBPQR_AllAPsCombined_Cleaned[-1, ]

# 5/1/23 - Create SubMeasureID
VBPQR_AllAPsCombined_Cleaned$`SubMeasure ID` <- substr(VBPQR_AllAPsCombined_Cleaned$Measure, 1, 3)

# write.csv(VBPQR_AllAPsCombined_Cleaned, "./data/output/VBPQualityRoster.csv")
Code
# create a duplicate at this phase to be used in later evaluation
VBPQR_AllAPsCombined_Cleaned2 <- VBPQR_AllAPsCombined_Cleaned |> 
  filter(`SubMeasure ID` == "AMM")

# write.csv(VBPQR_AllAPsCombined_Cleaned2, "./data/output/VBPQR_AllAPsCombined_Cleaned2.csv")

# Isolate member ID for the validation
VBPQR_AllAPsCombined_Cleaned <- VBPQR_AllAPsCombined_Cleaned |> 
  filter(`SubMeasure ID` == "AMM") |> 
  select(`Member ID`)

# write.csv(VBPQR_AllAPsCombined_Cleaned, "./data/output/VBP_Validation.csv")

AMM Validation Data Modeling

The Validation Model for AMM is defined in Conceptual and Logical terms. It describes the systems surrounding measurement of AMM and outlines a set of logical rules and data structures to be used across agencies and measures.

This model extracts data from multiple VBP Quality Reports and aggregates them into a single data set. It also collects adjudicated claims data from the Health Choice data warehouse.

The two data sets are then joined to create the Validation Matrix, comparing VBP cases against adjudicated claims.

The Validation Matrix is the table that will be used for evaluation! Phew! :)

Image of test design here The Alliance Value Based Purchasing Validation Model for AMM

Report Data: VBP Quality Roster Reports

The VBPQR_AllAPsCombined_Cleaned data frame was loaded into the test model and summarized by counting the instances of eligibility per member. This creates a vector of unduplicated Member IDs called VBP_Unduplicated.

Real Data: Adjudicated Claims

The AMMEligibilityTest claims data frame was loaded to the test model. It was then summarized by counting the instances of HEDIS My2022 eligible claims per member. This creates a vector of unduplicated Member IDs called AMMClaims_Unduplicated.

Merge Data to One Table

The VBP_Unduplicated and the AMMClaims_Unduplicated tables were full outer joined on the variable MemberID, meaning that all rows of data from both tables are included regardless of match. The resulting data frame is called Validation_Matrix.

  • This table contains all of the unduplicated VBPQR MemberIDs, AND all of the unduplicated Claims MemberIDs.

The data is assessed for cases where a VBP MemberID is validated against a Claims MemberID. A positive result is called “Match”, and a negative result is called “NoMatch”.
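
As a toy illustration (using hypothetical member IDs, and assuming dplyr is loaded as elsewhere in this document), the full outer join and the Match flag work like this:

```r
library(dplyr)

# Hypothetical members: A and B appear in both sources,
# C only in the VBP roster, D only in claims
vbp    <- data.frame(MemberID = c("A", "B", "C"), VBP    = c(1, 2, 1))
claims <- data.frame(MemberID = c("A", "B", "D"), claims = c(1, 1, 3))

toy <- merge(x = vbp, y = claims, by = "MemberID", all = TRUE) |>
  mutate(Match = if_else(is.na(VBP) | is.na(claims), "NoMatch", "Match"))

# A and B are validated ("Match"); C and D are not ("NoMatch")
```

Because `all = TRUE` keeps every row from both sides, a member missing from either source shows up with an NA count, which is exactly what the Match flag tests for.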

The Validation Matrix

  • The Validation_Matrix was created by reducing the VBP Quality Roster and adjudicated claims to unduplicated observations of each MemberID, and then matching the two data sets by MemberID.

The Validation_Matrix is the table that will be used for evaluation! Phew! :)

Code
VBP_Unduplicated <- VBPQR_AllAPsCombined_Cleaned |> 
  group_by(`Member ID`) |> 
  rename("MemberID" = `Member ID`) |> 
  count()

# write.csv(VBP_Unduplicated, "./data/output/VBP_Unduplicated.csv")
Code
AMMClaims_Unduplicated <- AMMEligibilityTest |> 
  filter(MemberID != "NULL") |> 
  group_by(MemberID) |> 
  count()

# write.csv(AMMClaims_Unduplicated, "./data/output/AMMClaims_Unduplicated.csv")
Code
Validation_Matrix <- 
  merge(x = VBP_Unduplicated,
        y = AMMClaims_Unduplicated,
        by = "MemberID",
        all = TRUE) |> 
  rename("VBP" = n.x,
         "claims" = n.y) |> 
  mutate(Match = if_else((is.na(VBP) | is.na(claims)), "NoMatch", "Match"))

# write.csv(Validation_Matrix, "./data/output/ValidationMatrix.csv")

VBP Reports

  • Eight VBP report roster sheets were imported, merged, and cleaned, creating DataRaw_VBPQR_AllAPsCombined with 4424 observations of 10 variables.

  • VBPQR_AllAPsCombined_Cleaned was created from the master to isolate instances of AMM, and then select Member ID, ultimately yielding 1691 observations of MemberIDs.

Claims

  • ** TODO: Needs rewording for PBM claims and MDDx claims, with inline code **

Results

Validating Report Data using Real Data

The VBP_Unduplicated and the AMMClaims_Unduplicated tables were read from left to right, meaning that all observations from VBP_Unduplicated are included, while AMMClaims_Unduplicated observations are included only if they match VBP_Unduplicated on the variable MemberID.

  • 1287 of the 16587 (7.76%) VBP members were validated against an eligible AMM Claims member, with 15300 (92.24%) not matched.
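
The proportions above can be reproduced with simple arithmetic from the matched and unmatched counts:

```r
# Reproduce the match rate from the counts reported above
matched   <- 1287
unmatched <- 15300
total     <- matched + unmatched   # 16587

round(matched / total, 4)     # 0.0776
round(unmatched / total, 4)   # 0.9224
```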

Analysis of impact on false eligible member events:

Overall Compliance of All Reported VBP QR Events (Comp_ChiSq):

Code
# Rename the member ID column to match the Validation_Matrix
VBPQR_AllAPsCombined_Cleaned2 <- VBPQR_AllAPsCombined_Cleaned2 %>%
  rename(MemberID = `Member ID`)

# Join the Validation_Matrix with the roster copy to attach compliance status
Compliance <-
  merge(x = Validation_Matrix,
        y = VBPQR_AllAPsCombined_Cleaned2,
        by = "MemberID",
        all.y = TRUE)

# Column 6 holds the roster's gap status field; rename it (positional index)
colnames(Compliance)[6] <- "Gap_Status"
#####
# write.csv(Compliance, "./data/output/Compliance.csv")

# write.csv(Compliance, "C:/Users/RyanK/OneDrive - The NARBHA Institute/Documents - Data Force/Projects/AllianceIntranetSupport/data/Compliance.csv")
  • 703 Non-Compliant

  • 988 Compliant

Compliance by VBP Members Matched with Claim:

  • 564: Matched, NonCompliant(MNC)

  • 723: Matched, Compliant (MC)

  • 139: NonMatched, NonCompliant (NMNC)

  • 265: NonMatched, Compliant (NMC)

Chi Squared Test

Q: What is the impact of the non-Matched member events?

A chi-squared test was run at alpha = .05. Pearson’s chi-squared test can be used to confirm whether there is a significant difference between the expected and observed results in this comparison.

Code
# transform for Chi Squared test: counts of each Match x Gap_Status combination
Comp_ChiSq <- Compliance |> 
  select(Match, Gap_Status) |> 
  group_by(Match, Gap_Status) |>
  count() |> 
  pivot_wider(names_from = "Gap_Status",
              values_from = "n")

# write.csv(Comp_ChiSq, "./data/output/Comp_ChiSq.csv")

chisq.test(Comp_ChiSq[-1], correct = FALSE)

    Pearson's Chi-squared test

data:  Comp_ChiSq[-1]
X-squared = 11.226, df = 1, p-value = 0.0008068
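
As a sanity check, the reported statistic can be reproduced directly from the four cell counts listed earlier (rows: Match/NoMatch; columns: NonCompliant/Compliant):

```r
# Rebuild the 2x2 contingency table from the reported cell counts
#            NonCompliant  Compliant
# Match               564        723
# NoMatch             139        265
tab <- matrix(c(564, 139, 723, 265), nrow = 2,
              dimnames = list(c("Match", "NoMatch"),
                              c("NonCompliant", "Compliant")))

res <- chisq.test(tab, correct = FALSE)
round(unname(res$statistic), 2)   # 11.23, matching the reported X-squared
```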

There is a significant difference (X^2 (1, N = 1691) = 11.226, p < .001) between the Report Data and the Real Data.

This means that the HCA Value Based Purchasing Quality Roster (VBPQR) does not agree closely enough with adjudicated claims to support reliable inferences drawn from the roster alone.

Code
Compliance |> 
  select(Match, Gap_Status) |> 
  group_by(Match, Gap_Status) |>
  count() |> 
ggplot(aes(fill = Match, x = Gap_Status, y = n)) +
  geom_col() +
  scale_fill_manual(values = c(Match = "#5e0932", NoMatch = "#119ec0")) +
  labs(title = "VBP Report Member Event Validation",
       subtitle = "Distinct member validation against adjudicated claims",
       caption = "*From claims adjudicated through April 27, 2023") +
  ylab("Number of Distinct Members") +
  xlab("AMM Compliance Status") +
theme_get() +
  theme(
    axis.title.y = element_text(vjust = 2),
    plot.subtitle = element_text(face = "italic"),
    plot.caption = element_text(face = "bold.italic", hjust = 0, vjust = -1)
  )

Discussion

In this case, the falsely eligible member events work in our favor: the VBP roster appears to overstate, rather than understate, Alliance performance on AMM.

Appendix

Create a GPI to NDC crosswalk

Because HCA categorizes its PBM claims in terms of Generic Product Identifier (GPI), but NCQA does theirs in terms of National Drug Code (NDC), we have to crosswalk the eligible AMM medications from NDC to their corresponding GPI codes. Fortunately, it’s totally not a pain in the neck to reverse engineer this at all, lol.

The HEDIS My2023 Medication List Names identifies the eligible medications by providing the respective NDC numbers9. The AHCCCS Preferred Drug List is used to match NDC numbers to GPI numbers based on medication name10.

In order to create a crosswalk that ties the NDC codes from NCQA to the GPI codes from Health Choice, we took the following steps:

  • Import the My2023 Medication to Code data from the My2023 Medication List Directory11
  • Filter the data to only include Antidepressant Medications
  • Rename some columns to provide consistency across tables
  • Import the AHCCCS Preferred Drug List 12. A side-by-side list of NDC and GPI codes is incredibly difficult to come by, so we are using this list from 2019. Fortunately, NDC and GPI codes are very stable.
  • Rename some columns to provide consistency across tables.
  • Filter Therapeutic Class for Antidepressant Other, Antidepressant SSRI
  • Merge the NCQA data set with the AHCCCS data set by matching on the NDC variable.
Code
# Convert NDC to GPI

#import the My2023 Medications to Code data set
Med_To_NDC <- read.csv("./data/My2023MedicationToCode.csv")

Med_To_NDC_AMM <- Med_To_NDC |> 
  filter(Medication.List.Name == "Antidepressant Medications") |> 
  select(
    Medication.List.Name,
    Code,
    Generic.Product.Name
  )

# Rename columns for consistency across tables
Med_To_NDC_AMM <- Med_To_NDC_AMM |>
  rename(NDC_NationalDrugCode = Code,
         MedicationList = Medication.List.Name,
         GenericProductName = Generic.Product.Name)

# Import AHCCCS Preferred drug list
NDC_To_GPI <- read.csv("./data/AHCCCS_PreferredDrugListChangesFor_08012019.csv")

# Rename columns for consistency across tables
# (the "TherepeuticClass" spelling is kept as used throughout this model)
NDC_To_GPI <- NDC_To_GPI |>
  rename(TherepeuticClass = Therapeutic.Class...Market.Basket,
         NDC_NationalDrugCode = National.Drug.Code..NDC..MediSpan,
         GPI_GenericProductIdentifier = MediSpan.Generic.Product.Indicator..GPI.)


NDC_To_GPI_AMM <- NDC_To_GPI |> 
  filter(TherepeuticClass %in% c("ANTIDEPRESSANTS, OTHER", "ANTIDEPRESSANTS, SSRIs")) |> 
  select(NDC_NationalDrugCode,
         GPI_GenericProductIdentifier,
         TherepeuticClass)

# merge then filter for only AHCCCS preferred medications

# So these are all of the medications from NCQA that AHCCCS has on their list. And now we have our crosswalk to address claims. 

My2023NDCtoGPICrosswalk <- merge(
  x = Med_To_NDC_AMM,
  y = NDC_To_GPI_AMM,
  by = "NDC_NationalDrugCode",
  all.y = TRUE
) |> 
  select(
    MedicationList,
    TherepeuticClass,
    NDC_NationalDrugCode,
    GPI_GenericProductIdentifier,
    GenericProductName
  ) |> 
  na.omit()

# write.csv(My2023NDCtoGPICrosswalk, "./data/output/My2023NDCtoGPICrosswalk.csv")

# Build a quoted, comma-separated string of the eligible GPI codes
concatenated_values_GPI <- paste0("'", My2023NDCtoGPICrosswalk$GPI_GenericProductIdentifier, "'", collapse = ", ")
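
A toy example (with hypothetical GPI values) shows the shape of the string built above; a quoted, comma-separated list like this is the format one would paste into, for instance, a SQL IN (...) clause when pulling PBM claims, though the exact downstream use is not shown in this document:

```r
# Hypothetical GPI codes, quoted and comma-separated
gpi <- c("58200010100310", "58160040100320")
in_list <- paste0("'", gpi, "'", collapse = ", ")
in_list   # "'58200010100310', '58160040100320'"
```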

Creation of AMM Member Follow Up List


Description

Procedure

The Validation_Matrix was transformed and rejoined with AMMEligibilityTest to create a table that will be used to generate Member Follow Up Lists. This new table is called MemberFollowUpList.

MemberFollowUpList is the table that is used for creating the Member Follow Up List for the AMM measure.

Code
# Using the copy of VBPQR_AllAPsCombined_Cleaned we made above, rename MemberID to match
# VBPQR_AllAPsCombined_Cleaned2 <- VBPQR_AllAPsCombined_Cleaned2 |>
#   rename("MemberID" = `Member ID`)

# Rejoin the matched and non-matched MemberIDs to their claims data
MemberFollowUpList <- 
   merge(x = Validation_Matrix,
        y = AMMEligibilityTest,
        by = "MemberID",
        all = TRUE) |>
  drop_na(Match)

# write.csv(MemberFollowUpList,"./data/output/AMM_MemberFollowUpList.csv")

Evaluation of Validated Data

The validated results were evaluated in several ways, including the distribution of antidepressant medications filled and the frequency of fills over time.

Code
MemberFollowUpList |> 
  filter(!is.na(LabelName)) |>
  group_by(LabelName) |> 
  summarise(percent = n() / nrow(MemberFollowUpList) * 100) |>
  arrange(desc(percent)) |>
  head(10) |>
  kable()
LabelName percent
TRAZODONE TAB 100MG 10.147884
TRAZODONE TAB 50MG 9.075722
SERTRALINE TAB 100MG 8.360239
BUPROPN HCL TAB 150MG XL 5.874103
BUPROPN HCL TAB 300MG XL 5.655549
SERTRALINE TAB 50MG 5.306906
ESCITALOPRAM TAB 10MG 4.780297
FLUOXETINE CAP 20MG 4.741202
ESCITALOPRAM TAB 20MG 4.495436
FLUOXETINE CAP 40MG 3.756588
Code
MemberFollowUpList |>
  filter(!is.na(LabelName)) |>
  group_by(LabelName) |>
  summarise(percent = n() / nrow(MemberFollowUpList) * 100) |>
  filter(percent >= 1) |>
  arrange(LabelName) |>
  ggplot(aes(x = percent, y = reorder(LabelName, percent))) +
  geom_bar(stat = "identity", fill = "steelblue") +
  ylab(NULL) +
  xlab("Percentage of Total Prescription Fills") +
  ggtitle("Horizontal Bar Chart of Label Names with at least 1%") +
  theme_minimal() +
  theme(axis.text.y = element_text(angle = 0, hjust = 1, margin = margin(r = 5)),
        axis.title.y = element_blank(),
        plot.title = element_text(hjust = 1.0),
        axis.text.x = element_text(hjust = 0),
        axis.text.y.right = element_text(hjust = 0))

Code
MemberFollowUpList |>
  mutate(dtefilled = as.Date(dtefilled)) |>
  mutate(month = floor_date(dtefilled, "month")) |>
  filter(month >= as.Date("2022-04-01") & month <= as.Date("2023-03-31")) |>
  count(month) |>
  ggplot(aes(x = month, y = n)) +
  geom_line() +
  geom_vline(xintercept = as.Date("2022-10-01"), linetype = "dashed", color = "red") +
  # Anchor the label at the top of the panel; the `n` column only exists
  # inside this pipeline, so max(MemberFollowUpList$n) is not available here
  annotate("text", x = as.Date("2022-10-01"), y = Inf, label = "End RBHA Contract", 
           vjust = 1.5, hjust = -0.025, color = "red") +
  scale_x_date(date_labels = "%b %Y", date_breaks = "1 month") +
  scale_y_continuous(labels = comma) +
  ylab("Number of Fills") +
  xlab(NULL) +
  ggtitle("Frequency of Antidepressant Medication Fills over Time") +
  theme_minimal() +
  theme(axis.text.x = element_text(angle = 45, hjust = 1))

Footnotes

  1. Health Choice Arizona. (2023). Value Based Purchasing Quality Roster↩︎

  2. Kivela, J.R. (2023). Value Based Purchasing Data Validation Model. The Northern Arizona Regional Behavioral Health Alliance.↩︎

  3. https://www.ncqa.org/hedis/↩︎

  4. https://www.ncqa.org/hedis/↩︎

  5. Kivela, J.R. (2023). Value Based Purchasing Data Validation Model. The Northern Arizona Regional Behavioral Health Alliance.↩︎

  6. CRISP-DM help overview. (August 17, 2021). https://www.ibm.com/docs/en/spss-modeler/saas?topic=dm-crisp-help-overview.↩︎

  7. HEDIS Criteria reference↩︎

  8. Alliance progress report (Kivela 2023)↩︎

  9. HEDIS Criteria reference↩︎

  10. HEDIS Criteria reference↩︎

  11. HEDIS Criteria reference↩︎

  12. HEDIS Criteria reference↩︎